Applying Referenceless MT Evaluation Metrics to a Multi-Engine MT System

Author

  • Joshua Albrecht
Abstract

Recent work in automatic machine translation evaluation has led to metrics that perform on par with standard metrics yet do not rely on human-generated gold-standard reference translations. To achieve this performance, alternative machine-generated translations are used in place of the human references. A seemingly natural application of such work is within a multi-engine machine translation (MEMT) system: the evaluation metric selects the best candidate from a list of possible translations produced by a number of different MT systems. This possibility is explored, and a number of factors affecting the performance of the resulting MEMT systems are identified. No significant gains have yet been made with this method, however.
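The abstract does not include the selection procedure itself, but the idea can be sketched: for each source sentence, score every engine's candidate against the other engines' outputs used as pseudo-references, then emit the highest-scoring candidate. The token-overlap F1 below is only a stand-in for the learned referenceless metric described in the paper, and all function names and example outputs are hypothetical.

    from collections import Counter

    def overlap_f1(hypothesis, pseudo_reference):
        # Unigram F1 between two token lists -- a simple stand-in
        # for the trained referenceless metric.
        hyp, ref = Counter(hypothesis), Counter(pseudo_reference)
        matches = sum((hyp & ref).values())
        if matches == 0:
            return 0.0
        precision = matches / sum(hyp.values())
        recall = matches / sum(ref.values())
        return 2 * precision * recall / (precision + recall)

    def select_best_candidate(candidates):
        # Score each engine's output against the other engines' outputs
        # (used as pseudo-references) and return the highest-scoring one.
        best, best_score = None, float("-inf")
        for i, cand in enumerate(candidates):
            peers = [c for j, c in enumerate(candidates) if j != i]
            score = sum(overlap_f1(cand, p) for p in peers) / len(peers)
            if score > best_score:
                best, best_score = cand, score
        return best

    # Hypothetical outputs from three MT engines for one source sentence
    outputs = [
        "the committee approved the new budget".split(),
        "the commission has approved new budget".split(),
        "committee approve the budget new".split(),
    ]
    print(" ".join(select_best_candidate(outputs)))

In the actual system the similarity function would be the trained metric; the peer-scoring loop is what makes the selection referenceless, since no human translation is consulted.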


Similar resources

Multi-Engine and Multi-Alignment Based Automatic Post-Editing and its Impact on Translation Productivity

In this paper we combine two strands of machine translation (MT) research: automatic post-editing (APE) and multi-engine (system combination) MT. APE systems learn a target-language-side second-stage MT system from data produced by human-corrected output of a first-stage MT system, to improve the output of the first-stage MT in what is essentially a sequential MT system combination architecture...


Learning to Translate with Multiple Objectives

We introduce an approach to optimize a machine translation (MT) system on multiple metrics simultaneously. Different metrics (e.g. BLEU, TER) focus on different aspects of translation quality; our multi-objective approach leverages these diverse aspects to improve overall quality. Our approach is based on the theory of Pareto Optimality. It is simple to implement on top of existing single-objective...
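Pareto optimality is the core mechanism this abstract refers to. A minimal sketch of a dominance check and frontier filter over candidate score vectors (assuming all metrics are oriented so that higher is better, e.g. TER negated) might look like the following; the numbers are invented for illustration.

    def dominates(a, b):
        # True if score vector a Pareto-dominates b: at least as good on
        # every metric and strictly better on at least one.
        return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

    def pareto_frontier(points):
        # Keep only candidates not dominated by any other candidate.
        return [p for p in points
                if not any(dominates(q, p) for q in points if q is not p)]

    # Hypothetical candidates scored as (BLEU, -TER)
    candidates = [(0.31, -0.52), (0.29, -0.48), (0.33, -0.55), (0.30, -0.47)]
    print(pareto_frontier(candidates))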


Ground Truth, Reference Truth & “Omniscient Truth” -- Parallel Phrases in Parallel Texts for MT Evaluation

Recently introduced automated methods of evaluating machine translation (MT) systems require the construction of parallel corpora of source language (SL) texts with human reference translations in the target language (TL). We present a novel method of exploiting and augmenting these resources for task-based MT evaluation, assessing how accurately people can extract Who, When, and Where elements...


MATREX: The DCU MT System for WMT 2010

This paper describes the DCU machine translation system in the evaluation campaign of the Joint Fifth Workshop on Statistical Machine Translation and Metrics in ACL-2010. We describe the modular design of our multi-engine machine translation (MT) system with particular focus on the components used in this participation. We participated in the English–Spanish and English–Czech translation tasks...


E-rating Machine Translation

We describe our submissions to the WMT11 shared MT evaluation task: MTeRater and MTeRater-Plus. Both are machine-learned metrics that use features from e-rater®, an automated essay scoring engine designed to assess writing proficiency. Despite using only features from e-rater and without comparing to translations, MTeRater achieves a sentence-level correlation with human rankings equivalent to...




Publication year: 2007